50 research outputs found

    Multi-view monocular pose estimation for spacecraft relative navigation

    This paper presents a method for estimating the pose of a non-cooperative target for spacecraft rendezvous applications using only a monocular camera and a three-dimensional model of the target. The model is used to build an offline database of pre-rendered keyframes with known poses. An online stage solves the model-to-image registration problem by matching two-dimensional point and edge features from the camera to the database. We apply our method to retrieve the motion of the now-inoperative satellite ENVISAT. The combination of both feature types is shown to produce a robust pose solution even for large displacements relative to the keyframes, and the method does not rely on real-time rendering, making it attractive for autonomous systems applications.
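    As a rough illustration of the offline-database idea described above, the sketch below retrieves the stored pose of the keyframe whose descriptor lies closest to a query descriptor. The flat-vector descriptors, string poses, and `nearest_keyframe` helper are hypothetical simplifications for illustration, not the paper's actual point-and-edge matching pipeline.

    ```python
    import math

    def nearest_keyframe(query_desc, keyframe_db):
        """Return the (pose, distance) of the keyframe whose descriptor is
        closest to the query descriptor under Euclidean distance.

        keyframe_db: list of (descriptor, pose) pairs; the pose is any
        object describing the pre-rendered viewpoint (hypothetical format).
        """
        best_pose, best_dist = None, math.inf
        for desc, pose in keyframe_db:
            d = math.dist(query_desc, desc)
            if d < best_dist:
                best_pose, best_dist = pose, d
        return best_pose, best_dist

    # Toy database: three keyframes with 2-D descriptors and labelled poses.
    db = [([0.0, 0.0], "pose_A"), ([1.0, 0.0], "pose_B"), ([0.0, 2.0], "pose_C")]
    pose, dist = nearest_keyframe([0.9, 0.1], db)
    print(pose)  # pose_B
    ```

    In the paper the retrieved keyframe only seeds the registration; the final pose is refined by aligning the matched 2D features against the rendered model view.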

    Histogram of distances for local surface description

    3D object recognition has proven superior to its 2D counterpart in numerous implementations, making it an active research topic. Local-descriptor-based proposals in particular, although quite accurate, are limited by the stability of the local reference frame or axis (LRF/A) on which the descriptors are defined. Moreover, extra processing time is needed to estimate the LRF for each local patch. We propose a 3D descriptor that removes the need for an LRF/A, dramatically reducing the required processing time, while also achieving robustness to high levels of noise and non-uniform subsampling. Our approach, named Histogram of Distances, is based on multiple L2-norm metrics of local patches, yielding a simple and fast-to-compute descriptor suitable for time-critical applications. Evaluation on popular point clouds of both high and low quality showed its promising performance.
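    The core reason a histogram of distances needs no LRF is that pairwise L2 distances are invariant to rotation and translation of the patch. A minimal sketch of that idea, with binning and normalization details simplified rather than taken from the paper:

    ```python
    import math
    from itertools import combinations

    def histogram_of_distances(points, bins=8):
        """LRF-free local descriptor: normalized histogram of pairwise
        L2 distances within a patch. Pairwise distances are unchanged by
        rigid motion, so no local reference frame is required.
        """
        dists = [math.dist(p, q) for p, q in combinations(points, 2)]
        d_max = max(dists)
        hist = [0] * bins
        for d in dists:
            idx = min(int(bins * d / d_max), bins - 1)  # clamp d == d_max
            hist[idx] += 1
        total = len(dists)
        return [h / total for h in hist]

    patch = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (1.0, 1.0, 1.0)]
    # The same patch rotated 90 degrees about the z-axis: (x, y, z) -> (-y, x, z).
    rotated = [(0.0, 0.0, 0.0), (0.0, 1.0, 0.0), (-1.0, 0.0, 0.0), (-1.0, 1.0, 1.0)]
    print(histogram_of_distances(patch) == histogram_of_distances(rotated))  # True
    ```

    The descriptor costs one distance computation per point pair, which is what makes it attractive for time-critical applications compared with LRF estimation.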

    Autonomous navigation for mobility scooters: a complete framework based on open-source software

    In recent years, there has been a growing demand for small vehicles targeted at users with mobility restrictions and designed to operate in pedestrian areas. The users of these vehicles are generally required to be in control for the entire duration of their journey, but many more people could benefit from them if some of the driving tasks were automated. In this scenario, we set out to develop an autonomous mobility scooter, with the aim of understanding the commercial feasibility of such a product. This paper reports on the progress of this project, proposing a framework for autonomous navigation in pedestrian areas and focusing in particular on the construction of suitable costmaps. The proposed framework is based on open-source software, including a library created by the authors for the generation of costmaps.
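    A costmap assigns a traversal cost to each cell of an occupancy grid, typically inflating obstacles so the planner keeps a safety margin. The sketch below is a minimal illustration of that inflation step with made-up cost values and a Chebyshev-distance radius; the authors' open-source library is considerably more elaborate.

    ```python
    def inflate_costmap(grid, inflation_radius):
        """Build a costmap from an occupancy grid (1 = obstacle, 0 = free):
        obstacle cells get cost 100, and every cell within
        `inflation_radius` (Chebyshev distance) of an obstacle gets
        at least cost 50. Values are illustrative only.
        """
        rows, cols = len(grid), len(grid[0])
        cost = [[0] * cols for _ in range(rows)]
        for r in range(rows):
            for c in range(cols):
                if grid[r][c] == 1:
                    for dr in range(-inflation_radius, inflation_radius + 1):
                        for dc in range(-inflation_radius, inflation_radius + 1):
                            rr, cc = r + dr, c + dc
                            if 0 <= rr < rows and 0 <= cc < cols:
                                new = 100 if (dr, dc) == (0, 0) else 50
                                cost[rr][cc] = max(cost[rr][cc], new)
        return cost

    # A single obstacle in the middle of a 3x3 grid.
    grid = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]
    print(inflate_costmap(grid, 1))
    # [[50, 50, 50], [50, 100, 50], [50, 50, 50]]
    ```

    For pedestrian areas the inflation radius would be chosen from the scooter's footprint plus a comfort margin for people walking nearby.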

    Real-time multiview data fusion for object tracking with RGBD sensors

    This paper presents a new approach to accurately track a moving vehicle with a multiview setup of red-green-blue-depth (RGBD) cameras. We first propose a correction method to eliminate a shift that occurs in depth sensors as they wear, an issue that cannot be corrected with the ordinary calibration procedure. Next, we present a sensor-wise filtering system to correct for an unknown vehicle motion. A data fusion algorithm is then used to optimally merge the sensor-wise estimated trajectories. We implement most parts of our solution on the graphics processor, so the whole system is able to operate at up to 25 frames per second with a configuration of five cameras. Test results show the accuracy achieved and the robustness of our solution to uncertainties in the measurements and the modelling.
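    A standard way to optimally merge independent per-sensor estimates of the same quantity is inverse-variance weighting, which minimizes the variance of the fused estimate. The sketch below shows that fusion rule for a single scalar coordinate; it is a generic illustration under the assumption of independent Gaussian errors, not the paper's full multi-camera pipeline.

    ```python
    def fuse_estimates(estimates):
        """Fuse independent estimates of one quantity, given as
        (value, variance) pairs, via inverse-variance weighting.
        Returns the fused value and its (reduced) variance.
        """
        weights = [1.0 / var for _, var in estimates]
        total = sum(weights)
        value = sum(w * v for w, (v, _) in zip(weights, estimates)) / total
        return value, 1.0 / total

    # Two cameras observe the same 1-D position with equal noise.
    fused, var = fuse_estimates([(2.0, 1.0), (4.0, 1.0)])
    print(fused, var)  # 3.0 0.5
    ```

    Note that the fused variance (0.5) is lower than either sensor's variance, which is the motivation for merging the sensor-wise trajectories rather than picking the best single camera.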

    Robust Adversarial Attacks Detection for Deep Learning based Relative Pose Estimation for Space Rendezvous

    Research on deep learning techniques for autonomous spacecraft relative navigation has grown continuously in recent years. Adopting these techniques offers enhanced performance, but also raises concerns about their trustworthiness and security, owing to their susceptibility to adversarial attacks. In this work, we propose a novel approach to adversarial attack detection for deep neural network-based relative pose estimation schemes, based on the concept of explainability. For an orbital rendezvous scenario, we develop a relative pose estimation technique using a Convolutional Neural Network (CNN) that takes an image from the chaser's onboard camera and accurately outputs the target's relative position and rotation. We perturb the input images with adversarial attacks generated by the Fast Gradient Sign Method (FGSM). The adversarial attack detector is then built on a Long Short-Term Memory (LSTM) network that takes Shapley values, an explainability measure, from the CNN-based pose estimator and flags adversarial attacks when they occur. Simulation results show that the proposed adversarial attack detector achieves a detection accuracy of 99.21%. Both the deep relative pose estimator and the adversarial attack detector are then tested on real data captured with our laboratory-designed setup, where the detector achieves an average detection accuracy of 96.29%.
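    FGSM itself is a one-line rule: shift each input pixel by a small step eps in the direction of the sign of the loss gradient. The sketch below applies that rule to a grayscale image represented as nested lists; the gradient is assumed precomputed (in practice it comes from backpropagation through the pose-estimation CNN), and the clamping range [0, 1] is an illustrative pixel-value convention.

    ```python
    def fgsm_perturb(image, grad, eps):
        """Fast Gradient Sign Method on a 2-D grayscale image.

        image: pixel values in [0, 1]; grad: d(loss)/d(pixel), here
        assumed precomputed by backprop. Each pixel moves by eps in the
        direction that increases the loss, then is clamped to [0, 1].
        """
        sign = lambda g: (g > 0) - (g < 0)  # -1, 0, or +1 as an int
        return [[min(max(px + eps * sign(g), 0.0), 1.0)
                 for px, g in zip(row_px, row_g)]
                for row_px, row_g in zip(image, grad)]

    img = [[0.5, 0.5], [0.5, 0.5]]
    grad = [[1.0, -2.0], [0.0, 3.0]]
    print(fgsm_perturb(img, grad, 0.25))  # [[0.75, 0.25], [0.5, 0.75]]
    ```

    Because the perturbation magnitude per pixel is bounded by eps, the attacked image can look nearly identical to the original, which is why a separate detector (here, an LSTM over Shapley values) is needed rather than visual inspection.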

    Visualization of 3D point clouds acquired by a laser: impact of stereoscopic vision and head tracking on a target search task

    In a military context, we study the visualization of 3D point clouds sensed by a drone, equipped with a 3D laser, flying around a battlefield area. We propose a user experiment protocol to study the effect of stereoscopic and head-tracked rendering on a target search task with a virtual reality headset. Our first results show a clear advantage when stereoscopic rendering and head tracking are activated.